A Tempt to Unify Heterogeneous Driving Databases using Traffic Primitives
A multitude of publicly available driving datasets and data platforms have
been released for autonomous vehicles (AV). However, the heterogeneity of
these databases in size, structure, and driving context makes existing
datasets practically ineffective due to a lack of uniform frameworks and
searchable indexes. To overcome these limitations of existing public
datasets, this paper proposes a data unification framework based on traffic
primitives with the ability to automatically unify and label heterogeneous
traffic data. This is achieved in two steps: 1) carefully arranging the raw
multidimensional time-series driving data into a relational database, and
then 2) automatically extracting labeled and indexed traffic primitives from
the traffic data through a Bayesian nonparametric learning method. Finally,
we evaluate the effectiveness of the developed framework using collected
real-world vehicle data.
Comment: 6 pages, 7 figures, 1 table, ITSC 201
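The two-step pipeline above can be sketched in code. The snippet below is a deliberately simplified stand-in: where the paper uses a Bayesian nonparametric learner to segment and label driving data, this sketch cuts a 1-D speed trace at large sample-to-sample jumps and labels each segment by its rounded mean. The function name, thresholds, and toy trace are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def extract_primitives(series, jump=2.0, min_len=5):
    """Cut a 1-D driving signal (e.g., speed in m/s) wherever consecutive
    samples jump by more than `jump`, then label each segment by its rounded
    mean. A crude heuristic stand-in for the paper's Bayesian nonparametric
    primitive extraction; returns (start, end, label) tuples."""
    boundaries = [0]
    for t in range(1, len(series)):
        # start a new segment on a large jump, but enforce a minimum length
        if abs(series[t] - series[t - 1]) > jump and t - boundaries[-1] >= min_len:
            boundaries.append(t)
    boundaries.append(len(series))
    return [(a, b, int(round(float(series[a:b].mean()))))
            for a, b in zip(boundaries[:-1], boundaries[1:])]

# toy trace: cruise at 10 m/s, brake to 2 m/s, accelerate to 15 m/s
rng = np.random.default_rng(0)
trace = np.concatenate([np.full(30, 10.0), np.full(30, 2.0), np.full(30, 15.0)])
trace += rng.normal(0.0, 0.1, trace.size)
print(extract_primitives(trace))  # three labeled primitives
```

A real implementation would replace the jump heuristic with a sticky HDP-HMM or similar model so the number of primitive types need not be fixed in advance.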
Spatiotemporal Learning of Multivehicle Interaction Patterns in Lane-Change Scenarios
Interpretation of common-yet-challenging interaction scenarios can benefit
well-founded decisions for autonomous vehicles. Previous research achieved
this by relying on prior knowledge of specific scenarios and predefined
models, which limits adaptive capability. This paper describes a Bayesian
nonparametric approach that leverages continuous (i.e., Gaussian processes) and
discrete (i.e., Dirichlet processes) stochastic processes to reveal underlying
interaction patterns of the ego vehicle with other nearby vehicles. Our model
relaxes dependency on the number of surrounding vehicles by developing an
acceleration-sensitive velocity field based on Gaussian processes.
Experimental results demonstrate that the velocity field can represent the
spatial interactions between the ego vehicle and its surroundings. Then, a
discrete Bayesian nonparametric model, integrating Dirichlet processes and
hidden Markov models, is developed to learn the interaction patterns over the
temporal space by segmenting and clustering the sequential interaction data
into interpretable granular patterns automatically. We then evaluate our
approach in the highway lane-change scenarios using the highD dataset collected
from real-world settings. Results demonstrate that the proposed Bayesian
nonparametric approach provides insight into the complicated lane-change
interactions of the ego vehicle with multiple surrounding traffic
participants, based on interpretable interaction patterns and their temporal
transition properties. Our proposed approach sheds light on efficiently
analyzing other kinds of multi-agent interactions, such as vehicle-pedestrian
interactions. View the demos via https://youtu.be/z_vf9UHtdAM.
Comment: for the supplements, see https://chengyuan-zhang.github.io/Multivehicle-Interaction
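The velocity-field idea described above can be illustrated with a minimal sketch. The paper builds the field with Gaussian processes; as a lightweight stand-in, the function below uses Gaussian-kernel (Nadaraya-Watson) weighting of nearby vehicles' velocities. The function name, the length scale, and the toy two-vehicle scene are illustrative assumptions.

```python
import numpy as np

def velocity_field(query_xy, veh_xy, veh_v, length_scale=10.0):
    """Kernel-smoothed velocity at a query point, a Nadaraya-Watson stand-in
    for the paper's Gaussian-process velocity field.

    query_xy: (2,) query position; veh_xy: (N, 2) vehicle positions;
    veh_v: (N, 2) vehicle velocities. Returns the interpolated (2,) velocity.
    Because weights are normalized, the result is invariant to the number of
    surrounding vehicles, mirroring the paper's motivation."""
    d2 = np.sum((veh_xy - query_xy) ** 2, axis=1)        # squared distances
    w = np.exp(-d2 / (2.0 * length_scale ** 2))          # Gaussian weights
    return (w[:, None] * veh_v).sum(axis=0) / w.sum()

# toy scene: two vehicles in adjacent lanes, queried midway between them
veh_xy = np.array([[0.0, 0.0], [20.0, 3.5]])
veh_v = np.array([[30.0, 0.0], [25.0, 0.0]])
v = velocity_field(np.array([10.0, 1.75]), veh_xy, veh_v)
print(v)  # equidistant query -> the average velocity, (27.5, 0.0)
```

Sequences of such field snapshots are what the discrete HDP-HMM stage would then segment into interaction patterns.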
A General Framework of Learning Multi-Vehicle Interaction Patterns from Videos
Semantic learning and understanding of multi-vehicle interaction patterns in
a cluttered driving environment are essential but challenging for autonomous
vehicles to make proper decisions. This paper presents a general framework to
gain insights into intricate multi-vehicle interaction patterns from bird's-eye
view traffic videos. We adopt a Gaussian velocity field to describe the
time-varying multi-vehicle interaction behaviors and then use deep autoencoders
to learn the associated latent representation of each temporal frame. Next, we
utilize a hidden semi-Markov model with a hierarchical Dirichlet process as a
prior to segment these sequential representations into granular components,
also called traffic primitives, corresponding to interaction patterns.
Experimental results demonstrate that our proposed framework can extract
traffic primitives from videos, thus providing a semantic way to analyze
multi-vehicle interaction patterns, even in cluttered driving scenarios far
messier than human beings can cope with.
Comment: 2019 IEEE Intelligent Transportation Systems Conference (ITSC
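The dimensionality-reduction step in this pipeline can be sketched as follows. The paper uses deep autoencoders to compress each velocity-field frame into a latent vector; the sketch substitutes PCA via SVD as a simple linear stand-in, producing the per-frame code sequence that the HDP-HSMM segmenter would then consume. The frame shape, latent size, and random toy data are illustrative assumptions.

```python
import numpy as np

def latent_codes(frames, k=2):
    """Project flattened per-frame fields to a k-dim latent space.
    PCA via SVD, a linear stand-in for the paper's deep autoencoder."""
    X = frames.reshape(len(frames), -1)          # (T, H*W) flattened frames
    Xc = X - X.mean(axis=0)                      # center features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # (T, k) latent sequence

rng = np.random.default_rng(1)
frames = rng.normal(size=(50, 8, 8))             # 50 toy bird's-eye frames
Z = latent_codes(frames, k=2)
print(Z.shape)  # (50, 2)
```

In the full framework, this (T, k) sequence is what the hierarchical-Dirichlet-process prior segments into traffic primitives.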
Co2N nanoparticles embedded N-doped mesoporous carbon as efficient electrocatalysts for oxygen reduction reaction
Co-N-C electrocatalysts have attracted great attention in the field of the electrocatalytic oxygen reduction reaction (ORR). In this work, we prepare Co2N nanoparticles embedded in N-doped mesoporous carbon by a facile method comprising in situ copolymerization and pyrolysis under an NH3 atmosphere. The results show that more N atoms can be doped into the carbon framework by NH3 pyrolysis, and that the pyrolysis temperature and Co content influence the ORR performance of the samples. The sample prepared with the Co precursor and pyrolyzed at 700 °C has a high N content (11.86 at.%) and a relatively large specific surface area (362 m² g⁻¹), and it exhibits superior electrocatalytic ORR performance in terms of onset potential E_onset (−0.038 V vs. SCE), half-wave potential E_1/2 (−0.126 V vs. SCE), and a large current density (5.22 mA cm⁻²). Additionally, the sample shows better stability and resistance to methanol poisoning than a Pt/C catalyst. The synergistic effect of Co-N active centers and hierarchical porous structures contributes to the excellent electrocatalytic activity, making these materials promising alternative catalysts for the ORR in fuel cells.
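The two headline metrics in this abstract, E_onset and E_1/2, are read off an ORR polarization curve. The sketch below shows one common way to extract them numerically; the synthetic sigmoidal curve, the 2% onset criterion, and the function name are all illustrative assumptions, not the paper's analysis procedure.

```python
import numpy as np

def onset_and_halfwave(E, j, onset_frac=0.02):
    """Estimate E_onset and E_1/2 from an ORR polarization curve.
    E: potentials (V vs. SCE, descending); j: current density (mA cm^-2,
    negative = cathodic). Onset = first potential where |j| exceeds
    `onset_frac` of the limiting current; E_1/2 = potential closest to
    half the limiting current."""
    j_lim = np.abs(j).max()                              # limiting current
    onset_idx = np.argmax(np.abs(j) >= onset_frac * j_lim)
    half_idx = np.argmin(np.abs(np.abs(j) - 0.5 * j_lim))
    return E[onset_idx], E[half_idx]

# synthetic sigmoidal curve with a 5.22 mA cm^-2 limiting current
E = np.linspace(0.1, -0.6, 141)
j = -5.22 / (1.0 + np.exp((E + 0.126) / 0.02))
print(onset_and_halfwave(E, j))
```

On real rotating-disk data one would typically smooth the curve first and use a tangent or derivative criterion for the onset rather than a fixed fraction.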
Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations
In recent years, computer vision has made remarkable advancements in
autonomous driving and robotics. However, it has been observed that deep
learning-based visual perception models lack robustness when faced with camera
motion perturbations. The current certification process for assessing
robustness is costly and time-consuming due to the extensive number of image
projections required for Monte Carlo sampling in the 3D camera motion space. To
address these challenges, we present a novel, efficient, and practical
framework for certifying the robustness of 3D-2D projective transformations
against camera motion perturbations. Our approach leverages a smoothing
distribution over the 2D pixel space instead of in the 3D physical space,
eliminating the need for costly camera motion sampling and significantly
enhancing the efficiency of robustness certifications. With the pixel-wise
smoothed classifier, we are able to fully upper bound the projection errors
using a technique of uniform partitioning in camera motion space. Additionally,
we extend our certification framework to a more general scenario where only a
single-frame point cloud is required in the projection oracle. This is achieved
by deriving Lipschitz-based approximated partition intervals. Through extensive
experimentation, we validate the trade-off between effectiveness and efficiency
enabled by our proposed method. Remarkably, our approach achieves approximately
80% certified accuracy while utilizing only 30% of the projected image frames.
Comment: 32 pages, 5 figures, 13 table
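The core idea above, smoothing over the 2D pixel space instead of the 3D camera-motion space, can be illustrated with the voting step of a smoothed classifier. The sketch below samples pixel translations and majority-votes the predictions; the full certification (upper-bounding projection errors via uniform partitioning and turning the vote share into a certified radius) is not reproduced here. The classifier, shift range, and toy image are illustrative assumptions.

```python
import numpy as np

def smoothed_predict(classifier, image, n_samples=200, max_shift=2, seed=0):
    """Pixel-wise smoothed prediction (sketch): instead of Monte Carlo
    sampling camera motions in 3D and re-projecting, sample 2D pixel
    translations and take a majority vote over the base classifier.
    `classifier` maps an HxW array to a class id. Returns the top class
    and its vote share (the quantity a certificate would lower-bound)."""
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(image, (int(dy), int(dx)), axis=(0, 1))
        c = classifier(shifted)
        votes[c] = votes.get(c, 0) + 1
    top = max(votes, key=votes.get)
    return top, votes[top] / n_samples

# toy base classifier: class 0 if the left half is brighter, else class 1
clf = lambda img: 0 if img[:, :8].mean() > img[:, 8:].mean() else 1
img = np.zeros((16, 16))
img[:, :8] = 1.0
result = smoothed_predict(clf, img)
print(result)  # small shifts never flip this toy decision: (0, 1.0)
```

Sampling in pixel space is cheap because each draw is an array shift, whereas each 3D camera-motion sample would require a full image re-projection.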